Search Results

Documents authored by Kratsch, Stefan


Document
Approximate Turing Kernelization and Lower Bounds for Domination Problems

Authors: Stefan Kratsch and Pascal Kunz

Published in: LIPIcs, Volume 285, 18th International Symposium on Parameterized and Exact Computation (IPEC 2023)


Abstract
An α-approximate polynomial Turing kernelization is a polynomial-time algorithm that computes an (α ⋅ c)-approximate solution for a parameterized optimization problem when given access to an oracle that can compute c-approximate solutions to instances with size bounded by a polynomial in the parameter. Hols et al. [ESA 2020] showed that a wide array of graph problems admit a (1+ε)-approximate polynomial Turing kernelization when parameterized by the treewidth of the graph and left open whether Dominating Set also admits such a kernelization. We show that Dominating Set and several related problems parameterized by treewidth do not admit constant-factor approximate polynomial Turing kernelizations, even with respect to the much larger parameter vertex cover number, under certain reasonable complexity assumptions. On the positive side, we show that all of them do have a (1+ε)-approximate polynomial Turing kernelization for every ε > 0 for the joint parameterization by treewidth and maximum degree, a parameter which generalizes cutwidth, for example.
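
To make the optimization objective behind these results concrete, the following self-contained Python sketch computes a minimum dominating set of a tiny graph by brute force; the example graph and helper names are mine and are purely illustrative, not taken from the paper.

    from itertools import combinations

    def is_dominating(adj, subset):
        """True if every vertex is in `subset` or has a neighbor in it."""
        dominated = set(subset)
        for v in subset:
            dominated.update(adj[v])
        return dominated == set(adj)

    def min_dominating_set(adj):
        """Exhaustive search over all vertex subsets (exponential; toy sizes only)."""
        for size in range(len(adj) + 1):
            for cand in combinations(adj, size):
                if is_dominating(adj, cand):
                    return set(cand)

    # Example: a path on five vertices; every dominating set needs at least 2 vertices.
    adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
    print(min_dominating_set(adj))   # a minimum dominating set of size 2, e.g. {0, 3}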

Cite as

Stefan Kratsch and Pascal Kunz. Approximate Turing Kernelization and Lower Bounds for Domination Problems. In 18th International Symposium on Parameterized and Exact Computation (IPEC 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 285, pp. 32:1-32:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{kratsch_et_al:LIPIcs.IPEC.2023.32,
  author =	{Kratsch, Stefan and Kunz, Pascal},
  title =	{{Approximate Turing Kernelization and Lower Bounds for Domination Problems}},
  booktitle =	{18th International Symposium on Parameterized and Exact Computation (IPEC 2023)},
  pages =	{32:1--32:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-305-8},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{285},
  editor =	{Misra, Neeldhara and Wahlstr\"{o}m, Magnus},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.IPEC.2023.32},
  URN =		{urn:nbn:de:0030-drops-194516},
  doi =		{10.4230/LIPIcs.IPEC.2023.32},
  annote =	{Keywords: Approximate Turing kernelization, approximation lower bounds, exponential-time hypothesis, dominating set, capacitated dominating, connected dominating set, independent dominating set, treewidth, vertex cover number}
}
Document
Tight Algorithms for Connectivity Problems Parameterized by Clique-Width

Authors: Falko Hegerfeld and Stefan Kratsch

Published in: LIPIcs, Volume 274, 31st Annual European Symposium on Algorithms (ESA 2023)


Abstract
The complexity of problems involving global constraints is usually much more difficult to understand than the complexity of problems involving only local constraints. In the realm of graph problems, connectivity constraints are a natural form of global constraints. We study connectivity problems from a fine-grained parameterized perspective. In a breakthrough result, Cygan et al. (TALG 2022) first obtained Monte-Carlo algorithms with single-exponential running time α^{tw} n^𝒪(1) for connectivity problems parameterized by treewidth by introducing the cut-and-count technique, which reduces many connectivity problems to locally checkable counting problems. Furthermore, the obtained bases α were shown to be optimal under the Strong Exponential-Time Hypothesis (SETH). However, since only sparse graphs may admit small treewidth, we lack knowledge of the fine-grained complexity of connectivity problems with respect to dense structure. The most popular graph parameter to measure dense structure is arguably clique-width, which intuitively measures how easily a graph can be constructed by repeatedly adding bicliques. Bergougnoux and Kanté (TCS 2019) have shown, using the rank-based approach, that many connectivity problems also admit single-exponential algorithms when parameterized by clique-width. Unfortunately, the obtained running times are far from optimal under SETH. We show how to obtain optimal running times parameterized by clique-width for two benchmark connectivity problems, namely Connected Vertex Cover and Connected Dominating Set. These are the first tight results for connectivity problems with respect to clique-width, and they are obtained by developing new algorithms based on the cut-and-count technique and novel lower bound constructions. Precisely, we show that there exist one-sided-error Monte-Carlo algorithms that, given a k-clique-expression, solve Connected Vertex Cover in time 6^k n^𝒪(1) and Connected Dominating Set in time 5^k n^𝒪(1). Both results are shown to be tight under SETH.
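
The counting identity behind the cut-and-count technique is easy to verify by brute force: once one solution vertex is fixed on the left side, every candidate vertex set X admits exactly 2^(cc(G[X]) - 1) consistent cuts, where cc counts connected components, so the number of (candidate, cut) pairs is odd precisely when G[X] is connected. The Python sketch below checks this identity on a toy graph; graph and function names are mine, and it deliberately ignores the isolation-lemma weighting and all clique-width machinery of the paper.

    from itertools import combinations

    def components(adj, X):
        """Connected components of the induced subgraph G[X]."""
        X, seen, comps = set(X), set(), []
        for s in X:
            if s in seen:
                continue
            stack, comp = [s], set()
            while stack:
                v = stack.pop()
                if v in comp:
                    continue
                comp.add(v)
                stack.extend(adj[v] & (X - comp))
            seen |= comp
            comps.append(comp)
        return comps

    def consistent_cuts(adj, X, anchor):
        """Count partitions (L, R) of X with `anchor` in L and no G[X]-edge between L and R."""
        rest = [v for v in X if v != anchor]
        count = 0
        for r in range(len(rest) + 1):
            for right in combinations(rest, r):
                R = set(right)
                L = set(X) - R
                if not any(u in adj[v] for v in L for u in R):
                    count += 1
        return count

    adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}, 4: {5}, 5: {4}}
    for k in range(1, len(adj) + 1):
        for X in combinations(adj, k):
            if 0 not in X:
                continue                  # fix vertex 0 on the left side
            cc = len(components(adj, X))
            assert consistent_cuts(adj, X, 0) == 2 ** (cc - 1)
    print("verified: every X contributes 2^(cc(G[X]) - 1) consistent cuts")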

Cite as

Falko Hegerfeld and Stefan Kratsch. Tight Algorithms for Connectivity Problems Parameterized by Clique-Width. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 59:1-59:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{hegerfeld_et_al:LIPIcs.ESA.2023.59,
  author =	{Hegerfeld, Falko and Kratsch, Stefan},
  title =	{{Tight Algorithms for Connectivity Problems Parameterized by Clique-Width}},
  booktitle =	{31st Annual European Symposium on Algorithms (ESA 2023)},
  pages =	{59:1--59:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-295-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{274},
  editor =	{G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.59},
  URN =		{urn:nbn:de:0030-drops-187124},
  doi =		{10.4230/LIPIcs.ESA.2023.59},
  annote =	{Keywords: Parameterized Complexity, Connectivity, Clique-width, Cut\&Count, Lower Bound}
}
Document
Tight Algorithmic Applications of Clique-Width Generalizations

Authors: Vera Chekan and Stefan Kratsch

Published in: LIPIcs, Volume 272, 48th International Symposium on Mathematical Foundations of Computer Science (MFCS 2023)


Abstract
In this work, we study two natural generalizations of clique-width introduced by Martin Fürer. Multi-clique-width (mcw) allows every vertex to hold multiple labels [ITCS 2017], while for fusion-width (fw) we have the possibility to merge all vertices of a certain label [LATIN 2014]. Fürer has shown that both parameters are upper-bounded by treewidth, thus making them more appealing from an algorithmic perspective than clique-width, and asked for applications of these parameters to problem solving. First, we determine the relation between these two parameters by showing that mcw ≤ fw + 1. Then we show that when parameterized by multi-clique-width, many problems (e.g., Connected Dominating Set) admit algorithms with the same running time as for clique-width, despite the exponential gap between these two parameters. For some problems (e.g., Hamiltonian Cycle) we show an analogous result for fusion-width: for this we present an alternative view on fusion-width by introducing so-called glue-expressions, which might be interesting on their own. All algorithms obtained in this work are tight under the (Strong) Exponential Time Hypothesis.

Cite as

Vera Chekan and Stefan Kratsch. Tight Algorithmic Applications of Clique-Width Generalizations. In 48th International Symposium on Mathematical Foundations of Computer Science (MFCS 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 272, pp. 35:1-35:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{chekan_et_al:LIPIcs.MFCS.2023.35,
  author =	{Chekan, Vera and Kratsch, Stefan},
  title =	{{Tight Algorithmic Applications of Clique-Width Generalizations}},
  booktitle =	{48th International Symposium on Mathematical Foundations of Computer Science (MFCS 2023)},
  pages =	{35:1--35:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-292-1},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{272},
  editor =	{Leroux, J\'{e}r\^{o}me and Lombardy, Sylvain and Peleg, David},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2023.35},
  URN =		{urn:nbn:de:0030-drops-185699},
  doi =		{10.4230/LIPIcs.MFCS.2023.35},
  annote =	{Keywords: Parameterized complexity, connectivity problems, clique-width}
}
Document
Tight Bounds for Connectivity Problems Parameterized by Cutwidth

Authors: Narek Bojikian, Vera Chekan, Falko Hegerfeld, and Stefan Kratsch

Published in: LIPIcs, Volume 254, 40th International Symposium on Theoretical Aspects of Computer Science (STACS 2023)


Abstract
In this work we start the investigation of tight complexity bounds for connectivity problems parameterized by cutwidth assuming the Strong Exponential-Time Hypothesis (SETH). Van Geffen et al. [Bas A. M. van Geffen et al., 2020] posed this question for Odd Cycle Transversal and Feedback Vertex Set. We answer it for these two and four further problems, namely Connected Vertex Cover, Connected Dominating Set, Steiner Tree, and Connected Odd Cycle Transversal. For the latter two problems it sufficed to prove lower bounds that match the running time inherited from parameterization by treewidth; for the others we provide faster algorithms than relative to treewidth and prove matching lower bounds. For the upper bounds we first extend the idea of Groenland et al. [Carla Groenland et al., 2022] to solve what we call coloring-like problems. Such a problem is defined by a symmetric matrix M over 𝔽₂ indexed by a set of colors. The goal is to count the number (modulo some prime p) of colorings of a graph such that, for every edge, the entry of M indexed by the colors of its endpoints equals 1. We show that this problem can be solved faster if M has small rank over 𝔽_p. We apply this result to obtain our upper bounds for Connected Vertex Cover and Connected Dominating Set. The upper bounds for Odd Cycle Transversal and Feedback Vertex Set use a subdivision trick to get below the bounds that matrix rank would yield.
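
The coloring-like counting problem defined above is compact enough to state directly in code. The following self-contained Python sketch counts M-compatible colorings of a small graph by brute force, modulo a prime p; the toy graph, the matrix M, and all names are mine and serve only to illustrate the definition.

    from itertools import product

    def count_M_colorings(adj, M, num_colors, p):
        """Count colorings c with M[c(u)][c(v)] == 1 for every edge uv, modulo p."""
        vertices = list(adj)
        edges = [(u, v) for u in adj for v in adj[u] if u < v]
        total = 0
        for assignment in product(range(num_colors), repeat=len(vertices)):
            col = dict(zip(vertices, assignment))
            if all(M[col[u]][col[v]] == 1 for u, v in edges):
                total += 1
        return total % p

    # A symmetric 0/1 matrix indexed by two colors; this particular M forbids equal
    # colors on an edge, so the count equals the number of proper 2-colorings.
    M = [[0, 1],
         [1, 0]]
    adj = {0: {1}, 1: {0, 2}, 2: {1}}                      # a path on three vertices
    print(count_M_colorings(adj, M, num_colors=2, p=97))   # 2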

Cite as

Narek Bojikian, Vera Chekan, Falko Hegerfeld, and Stefan Kratsch. Tight Bounds for Connectivity Problems Parameterized by Cutwidth. In 40th International Symposium on Theoretical Aspects of Computer Science (STACS 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 254, pp. 14:1-14:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{bojikian_et_al:LIPIcs.STACS.2023.14,
  author =	{Bojikian, Narek and Chekan, Vera and Hegerfeld, Falko and Kratsch, Stefan},
  title =	{{Tight Bounds for Connectivity Problems Parameterized by Cutwidth}},
  booktitle =	{40th International Symposium on Theoretical Aspects of Computer Science (STACS 2023)},
  pages =	{14:1--14:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-266-2},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{254},
  editor =	{Berenbrink, Petra and Bouyer, Patricia and Dawar, Anuj and Kant\'{e}, Mamadou Moustapha},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2023.14},
  URN =		{urn:nbn:de:0030-drops-176667},
  doi =		{10.4230/LIPIcs.STACS.2023.14},
  annote =	{Keywords: Parameterized complexity, connectivity problems, cutwidth}
}
Document
Towards Exact Structural Thresholds for Parameterized Complexity

Authors: Falko Hegerfeld and Stefan Kratsch

Published in: LIPIcs, Volume 249, 17th International Symposium on Parameterized and Exact Computation (IPEC 2022)


Abstract
Parameterized complexity seeks to optimally use input structure to obtain faster algorithms for NP-hard problems. This has been most successful for graphs of low treewidth, i.e., graphs decomposable by small separators: Many problems admit fast algorithms relative to treewidth and many of them are optimal under the Strong Exponential-Time Hypothesis (SETH). Fewer such results are known for more general structure such as low clique-width (decomposition by large and dense but structured separators) and more restrictive structure such as low deletion distance to some sparse graph class. Despite these successes, such results remain "islands" within the realm of possible structure. Rather than adding more islands, we seek to determine the transitions between them, that is, we aim for structural thresholds where the complexity increases as input structure becomes more general. Going from deletion distance to treewidth, is a single deletion set to a graph with simple components enough to yield the same lower bound as for treewidth or does it take many disjoint separators? Going from treewidth to clique-width, how much more density entails the same complexity as clique-width? Conversely, what is the most restrictive structure that yields the same lower bound? For treewidth, we obtain both refined and new lower bounds that apply already to graphs with a single separator X such that G-X has treewidth at most r = 𝒪(1), while G has treewidth |X|+𝒪(1). We rule out algorithms running in time 𝒪^*((r+1-ε)^k) for Deletion to r-Colorable parameterized by k = |X|; this implies the same lower bound relative to treedepth and (hence) also to treewidth. It specializes to 𝒪^*((3-ε)^k) for Odd Cycle Transversal where tw(G-X) ≤ r = 2 is best possible. For clique-width, an extended version of the above reduction rules out time 𝒪^*((4-ε)^k), where X is allowed to be a possibly large separator consisting of k (true) twinclasses, while the treewidth of G - X remains r; this is proved also for the more general Deletion to r-Colorable and it implies the same lower bound relative to clique-width. Further results complement what is known for Vertex Cover, Dominating Set and Maximum Cut. All lower bounds are matched by existing and newly designed algorithms.

Cite as

Falko Hegerfeld and Stefan Kratsch. Towards Exact Structural Thresholds for Parameterized Complexity. In 17th International Symposium on Parameterized and Exact Computation (IPEC 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 249, pp. 17:1-17:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@InProceedings{hegerfeld_et_al:LIPIcs.IPEC.2022.17,
  author =	{Hegerfeld, Falko and Kratsch, Stefan},
  title =	{{Towards Exact Structural Thresholds for Parameterized Complexity}},
  booktitle =	{17th International Symposium on Parameterized and Exact Computation (IPEC 2022)},
  pages =	{17:1--17:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-260-0},
  ISSN =	{1868-8969},
  year =	{2022},
  volume =	{249},
  editor =	{Dell, Holger and Nederlof, Jesper},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.IPEC.2022.17},
  URN =		{urn:nbn:de:0030-drops-173734},
  doi =		{10.4230/LIPIcs.IPEC.2022.17},
  annote =	{Keywords: Parameterized complexity, lower bound, vertex cover, odd cycle transversal, SETH, modulator, treedepth, cliquewidth}
}
Document
Approximate Turing Kernelization for Problems Parameterized by Treewidth

Authors: Eva-Maria C. Hols, Stefan Kratsch, and Astrid Pieterse

Published in: LIPIcs, Volume 173, 28th Annual European Symposium on Algorithms (ESA 2020)


Abstract
We extend the notion of lossy kernelization, introduced by Lokshtanov et al. [STOC 2017], to approximate Turing kernelization. An α-approximate Turing kernel for a parameterized optimization problem is a polynomial-time algorithm that, when given access to an oracle that outputs c-approximate solutions in 𝒪(1) time, obtains an α ⋅ c-approximate solution to the considered problem, using calls to the oracle of size at most f(k) for some function f that only depends on the parameter. Using this definition, we show that Independent Set parameterized by treewidth 𝓁 has a (1+ε)-approximate Turing kernel with 𝒪(𝓁²/ε) vertices, answering an open question posed by Lokshtanov et al. [STOC 2017]. Furthermore, we give (1+ε)-approximate Turing kernels for the following graph problems parameterized by treewidth: Vertex Cover, Edge Clique Cover, Edge-Disjoint Triangle Packing and Connected Vertex Cover. We generalize the result for Independent Set and Vertex Cover, by showing that all graph problems that we will call friendly admit (1+ε)-approximate Turing kernels of polynomial size when parameterized by treewidth. We use this to obtain approximate Turing kernels for Vertex-Disjoint H-packing for connected graphs H, Clique Cover, Feedback Vertex Set and Edge Dominating Set.
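
Spelled out for a minimization problem, the guarantee in this definition composes as follows (a schematic restatement in my own notation, not a formula taken from the paper): if every oracle call is made on an instance of size at most f(k) and returns a c-approximate solution, then the algorithm outputs a solution ALG with

\[ \mathrm{ALG}(I,k) \;\le\; \alpha \cdot c \cdot \mathrm{OPT}(I), \qquad \text{where } \alpha = 1 + \varepsilon \text{ in the results above.} \]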

Cite as

Eva-Maria C. Hols, Stefan Kratsch, and Astrid Pieterse. Approximate Turing Kernelization for Problems Parameterized by Treewidth. In 28th Annual European Symposium on Algorithms (ESA 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 173, pp. 60:1-60:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{hols_et_al:LIPIcs.ESA.2020.60,
  author =	{Hols, Eva-Maria C. and Kratsch, Stefan and Pieterse, Astrid},
  title =	{{Approximate Turing Kernelization for Problems Parameterized by Treewidth}},
  booktitle =	{28th Annual European Symposium on Algorithms (ESA 2020)},
  pages =	{60:1--60:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-162-7},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{173},
  editor =	{Grandoni, Fabrizio and Herman, Grzegorz and Sanders, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2020.60},
  URN =		{urn:nbn:de:0030-drops-129261},
  doi =		{10.4230/LIPIcs.ESA.2020.60},
  annote =	{Keywords: Approximation, Turing kernelization, Graph problems, Treewidth}
}
Document
Solving Connectivity Problems Parameterized by Treedepth in Single-Exponential Time and Polynomial Space

Authors: Falko Hegerfeld and Stefan Kratsch

Published in: LIPIcs, Volume 154, 37th International Symposium on Theoretical Aspects of Computer Science (STACS 2020)


Abstract
A breakthrough result of Cygan et al. (FOCS 2011) showed that connectivity problems parameterized by treewidth can be solved much faster than the previously best known time 𝒪^*(2^{𝒪(tw log tw)}). Using their inspired Cut&Count technique, they obtained 𝒪^*(α^{tw}) time algorithms for many such problems. Moreover, they proved these running times to be optimal assuming the Strong Exponential-Time Hypothesis. Unfortunately, like other dynamic programming algorithms on tree decompositions, these algorithms also require exponential space, and this is widely believed to be unavoidable. In contrast, for the slightly larger parameter called treedepth, there are already several examples of matching the time bounds obtained for treewidth, but using only polynomial space. Nevertheless, this has remained open for connectivity problems. In the present work, we close this knowledge gap by applying the Cut&Count technique to graphs of small treedepth. While the general idea is unchanged, we have to design novel procedures for counting consistently cut solution candidates using only polynomial space. Concretely, we obtain time 𝒪^*(3^d) and polynomial space for Connected Vertex Cover, Feedback Vertex Set, and Steiner Tree on graphs of treedepth d. Similarly, we obtain time 𝒪^*(4^d) and polynomial space for Connected Dominating Set and Connected Odd Cycle Transversal.
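
Treedepth, the parameter d in the running times above, has a simple recursive characterization: the treedepth of the empty graph is 0, for a connected graph it is one plus the minimum of the treedepth of G - v over all vertices v, and for a disconnected graph it is the maximum over its components. The small Python sketch below evaluates this definition directly; it runs in exponential time and is meant for toy inputs only, with graph and helper names of my choosing.

    def components(adj):
        """Connected components of a graph given as an adjacency dict of sets."""
        seen, comps = set(), []
        for s in adj:
            if s in seen:
                continue
            stack, comp = [s], set()
            while stack:
                v = stack.pop()
                if v in comp:
                    continue
                comp.add(v)
                stack.extend(adj[v] - comp)
            seen |= comp
            comps.append(comp)
        return comps

    def induced(adj, keep):
        return {v: adj[v] & keep for v in keep}

    def treedepth(adj):
        """Treedepth via its recursive definition; exponential time, illustration only."""
        if not adj:
            return 0
        comps = components(adj)
        if len(comps) > 1:
            return max(treedepth(induced(adj, c)) for c in comps)
        return 1 + min(treedepth(induced(adj, set(adj) - {v})) for v in adj)

    # A path on four vertices has treedepth 3.
    path4 = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
    print(treedepth(path4))   # 3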

Cite as

Falko Hegerfeld and Stefan Kratsch. Solving Connectivity Problems Parameterized by Treedepth in Single-Exponential Time and Polynomial Space. In 37th International Symposium on Theoretical Aspects of Computer Science (STACS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 154, pp. 29:1-29:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{hegerfeld_et_al:LIPIcs.STACS.2020.29,
  author =	{Hegerfeld, Falko and Kratsch, Stefan},
  title =	{{Solving Connectivity Problems Parameterized by Treedepth in Single-Exponential Time and Polynomial Space}},
  booktitle =	{37th International Symposium on Theoretical Aspects of Computer Science (STACS 2020)},
  pages =	{29:1--29:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-140-5},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{154},
  editor =	{Paul, Christophe and Bl\"{a}ser, Markus},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2020.29},
  URN =		{urn:nbn:de:0030-drops-118907},
  doi =		{10.4230/LIPIcs.STACS.2020.29},
  annote =	{Keywords: Parameterized Complexity, Connectivity, Treedepth, Cut\&Count, Polynomial Space}
}
Document
Elimination Distances, Blocking Sets, and Kernels for Vertex Cover

Authors: Eva-Maria C. Hols, Stefan Kratsch, and Astrid Pieterse

Published in: LIPIcs, Volume 154, 37th International Symposium on Theoretical Aspects of Computer Science (STACS 2020)


Abstract
The Vertex Cover problem plays an essential role in the study of polynomial kernelization in parameterized complexity, i.e., the study of provable and efficient preprocessing for NP-hard problems. Motivated by the great variety of positive and negative results for kernelization for Vertex Cover subject to different parameters and graph classes, we seek to unify and generalize them using so-called blocking sets. A blocking set is a set of vertices such that no optimal vertex cover contains all vertices in the blocking set, and the study of minimal blocking sets played implicit and explicit roles in many existing results. We show that in the most-studied setting, parameterized by the size of a deletion set to a specified graph class 𝒞, bounded minimal blocking set size is necessary but not sufficient to get a polynomial kernelization. Under mild technical assumptions, bounded minimal blocking set size is shown to allow an essentially tight efficient reduction in the number of connected components. We then determine the exact maximum size of minimal blocking sets for graphs of bounded elimination distance to any hereditary class 𝒞, including the case of graphs of bounded treedepth. We get similar but not tight bounds for certain non-hereditary classes 𝒞, including the class 𝒞_{LP} of graphs where integral and fractional vertex cover size coincide. These bounds allow us to derive polynomial kernels for Vertex Cover parameterized by the size of a deletion set to graphs of bounded elimination distance to, e.g., forest, bipartite, or 𝒞_{LP} graphs.
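
The central notion is easy to test by brute force on small graphs: a set B is a blocking set if no minimum vertex cover contains all of B, and it is minimal if no proper subset of B is already blocking. A self-contained Python sketch (toy graph and names are mine, exponential time):

    from itertools import combinations

    def minimum_vertex_covers(adj):
        """All minimum vertex covers of a small graph, by exhaustive search."""
        edges = [(u, v) for u in adj for v in adj[u] if u < v]
        for size in range(len(adj) + 1):
            covers = [set(c) for c in combinations(adj, size)
                      if all(u in c or v in c for u, v in edges)]
            if covers:
                return covers

    def is_blocking(adj, B):
        """B is blocking if no minimum vertex cover contains all of B."""
        return all(not B <= C for C in minimum_vertex_covers(adj))

    def minimal_blocking_sets(adj):
        blocking = [set(B) for r in range(1, len(adj) + 1)
                    for B in combinations(adj, r) if is_blocking(adj, set(B))]
        return [B for B in blocking if not any(B2 < B for B2 in blocking)]

    # Path 0-1-2: the unique minimum vertex cover is {1}, so {0} and {2} are the
    # minimal blocking sets.
    adj = {0: {1}, 1: {0, 2}, 2: {1}}
    print(minimal_blocking_sets(adj))   # [{0}, {2}]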

Cite as

Eva-Maria C. Hols, Stefan Kratsch, and Astrid Pieterse. Elimination Distances, Blocking Sets, and Kernels for Vertex Cover. In 37th International Symposium on Theoretical Aspects of Computer Science (STACS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 154, pp. 36:1-36:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{hols_et_al:LIPIcs.STACS.2020.36,
  author =	{Hols, Eva-Maria C. and Kratsch, Stefan and Pieterse, Astrid},
  title =	{{Elimination Distances, Blocking Sets, and Kernels for Vertex Cover}},
  booktitle =	{37th International Symposium on Theoretical Aspects of Computer Science (STACS 2020)},
  pages =	{36:1--36:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-140-5},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{154},
  editor =	{Paul, Christophe and Bl\"{a}ser, Markus},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2020.36},
  URN =		{urn:nbn:de:0030-drops-118974},
  doi =		{10.4230/LIPIcs.STACS.2020.36},
  annote =	{Keywords: Vertex Cover, kernelization, blocking sets, elimination distance, structural parameters}
}
Document
Efficient Parameterized Algorithms for Computing All-Pairs Shortest Paths

Authors: Stefan Kratsch and Florian Nelles

Published in: LIPIcs, Volume 154, 37th International Symposium on Theoretical Aspects of Computer Science (STACS 2020)


Abstract
Computing all-pairs shortest paths is a fundamental and much-studied problem with many applications. Unfortunately, despite intense study, there are still no significantly faster algorithms for it than the 𝒪(n³) time algorithm due to Floyd and Warshall (1962). Somewhat faster algorithms exist for the vertex-weighted version if fast matrix multiplication may be used. Yuster (SODA 2009) gave an algorithm running in time 𝒪(n^{2.842}), but no combinatorial, truly subcubic algorithm is known. Motivated by the recent framework of efficient parameterized algorithms (or "FPT in P"), we investigate the influence of the graph parameters clique-width (cw) and modular-width (mw) on the running times of algorithms for solving All-Pairs Shortest Paths. We obtain efficient (and combinatorial) parameterized algorithms on non-negative vertex-weighted graphs with running times 𝒪(cw²n²) and 𝒪(mw²n + n²), respectively. If fast matrix multiplication is allowed, then the latter can be improved to 𝒪(mw^{1.842} n + n²) using the algorithm of Yuster as a black box. The algorithm relative to modular-width is adaptive, meaning that its running time matches the best unparameterized algorithm for parameter value mw equal to n, and it outperforms the unparameterized algorithms already for mw ∈ 𝒪(n^{1-ε}) for any ε > 0.
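
For reference, the 𝒪(n³) baseline mentioned above is the classic Floyd-Warshall recurrence d(i,j) = min(d(i,j), d(i,k) + d(k,j)). A minimal, unparameterized Python version is given below; it uses edge weights for simplicity and is not one of the parameterized algorithms of the paper.

    INF = float("inf")

    def floyd_warshall(w):
        """All-pairs shortest path distances from an n x n weight matrix (INF = no arc)."""
        n = len(w)
        d = [row[:] for row in w]
        for k in range(n):
            for i in range(n):
                for j in range(n):
                    if d[i][k] + d[k][j] < d[i][j]:
                        d[i][j] = d[i][k] + d[k][j]
        return d

    w = [[0, 3, INF],
         [INF, 0, 1],
         [2, INF, 0]]
    print(floyd_warshall(w))   # e.g. the distance from 0 to 2 is 4, via vertex 1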

Cite as

Stefan Kratsch and Florian Nelles. Efficient Parameterized Algorithms for Computing All-Pairs Shortest Paths. In 37th International Symposium on Theoretical Aspects of Computer Science (STACS 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 154, pp. 38:1-38:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)


BibTeX

@InProceedings{kratsch_et_al:LIPIcs.STACS.2020.38,
  author =	{Kratsch, Stefan and Nelles, Florian},
  title =	{{Efficient Parameterized Algorithms for Computing All-Pairs Shortest Paths}},
  booktitle =	{37th International Symposium on Theoretical Aspects of Computer Science (STACS 2020)},
  pages =	{38:1--38:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-140-5},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{154},
  editor =	{Paul, Christophe and Bl\"{a}ser, Markus},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2020.38},
  URN =		{urn:nbn:de:0030-drops-118992},
  doi =		{10.4230/LIPIcs.STACS.2020.38},
  annote =	{Keywords: All-pairs shortest Paths, efficient parameterized Algorithms, parameterized Complexity, Clique-width, Modular-width}
}
Document
Parameterized Approximation Schemes for Independent Set of Rectangles and Geometric Knapsack

Authors: Fabrizio Grandoni, Stefan Kratsch, and Andreas Wiese

Published in: LIPIcs, Volume 144, 27th Annual European Symposium on Algorithms (ESA 2019)


Abstract
The area of parameterized approximation seeks to combine approximation and parameterized algorithms to obtain, e.g., (1+epsilon)-approximations in f(k,epsilon)n^O(1) time where k is some parameter of the input. The goal is to overcome lower bounds from either of the areas. We obtain the following results on parameterized approximability: - In the maximum independent set of rectangles problem (MISR) we are given a collection of n axis parallel rectangles in the plane. Our goal is to select a maximum-cardinality subset of pairwise non-overlapping rectangles. This problem is NP-hard and also W[1]-hard [Marx, ESA'05]. The best-known polynomial-time approximation factor is O(log log n) [Chalermsook and Chuzhoy, SODA'09] and it admits a QPTAS [Adamaszek and Wiese, FOCS'13; Chuzhoy and Ene, FOCS'16]. Here we present a parameterized approximation scheme (PAS) for MISR, i.e. an algorithm that, for any given constant epsilon>0 and integer k>0, in time f(k,epsilon)n^g(epsilon), either outputs a solution of size at least k/(1+epsilon), or declares that the optimum solution has size less than k. - In the (2-dimensional) geometric knapsack problem (2DK) we are given an axis-aligned square knapsack and a collection of axis-aligned rectangles in the plane (items). Our goal is to translate a maximum cardinality subset of items into the knapsack so that the selected items do not overlap. In the version of 2DK with rotations (2DKR), we are allowed to rotate items by 90 degrees. Both variants are NP-hard, and the best-known polynomial-time approximation factor is 2+epsilon [Jansen and Zhang, SODA'04]. These problems admit a QPTAS for polynomially bounded item sizes [Adamaszek and Wiese, SODA'15]. We show that both variants are W[1]-hard. Furthermore, we present a PAS for 2DKR. For all considered problems, getting time f(k,epsilon)n^O(1), rather than f(k,epsilon)n^g(epsilon), would give FPT time f'(k)n^O(1) exact algorithms by setting epsilon=1/(k+1), contradicting W[1]-hardness. Instead, for each fixed epsilon>0, our PASs give (1+epsilon)-approximate solutions in FPT time. For both MISR and 2DKR our techniques also give rise to preprocessing algorithms that take n^g(epsilon) time and return a subset of at most k^g(epsilon) rectangles/items that contains a solution of size at least k/(1+epsilon) if a solution of size k exists. This is a special case of the recently introduced notion of a polynomial-size approximate kernelization scheme [Lokshtanov et al., STOC'17].
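
The feasibility predicate underlying MISR is simply pairwise non-overlap of axis-parallel rectangles. A small self-contained Python check (rectangle encoding and the open-interior convention are my own choices for illustration):

    from itertools import combinations

    def overlap(r1, r2):
        """Rectangles as (x1, y1, x2, y2); True if their open interiors intersect."""
        (ax1, ay1, ax2, ay2), (bx1, by1, bx2, by2) = r1, r2
        return ax1 < bx2 and bx1 < ax2 and ay1 < by2 and by1 < ay2

    def independent(rectangles):
        """True if the rectangles are pairwise non-overlapping, i.e. a feasible MISR solution."""
        return not any(overlap(a, b) for a, b in combinations(rectangles, 2))

    print(independent([(0, 0, 2, 2), (2, 0, 4, 2)]))   # True: the rectangles only touch
    print(independent([(0, 0, 3, 3), (1, 1, 2, 2)]))   # False: one is nested in the other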

Cite as

Fabrizio Grandoni, Stefan Kratsch, and Andreas Wiese. Parameterized Approximation Schemes for Independent Set of Rectangles and Geometric Knapsack. In 27th Annual European Symposium on Algorithms (ESA 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 144, pp. 53:1-53:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{grandoni_et_al:LIPIcs.ESA.2019.53,
  author =	{Grandoni, Fabrizio and Kratsch, Stefan and Wiese, Andreas},
  title =	{{Parameterized Approximation Schemes for Independent Set of Rectangles and Geometric Knapsack}},
  booktitle =	{27th Annual European Symposium on Algorithms (ESA 2019)},
  pages =	{53:1--53:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-124-5},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{144},
  editor =	{Bender, Michael A. and Svensson, Ola and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2019.53},
  URN =		{urn:nbn:de:0030-drops-111741},
  doi =		{10.4230/LIPIcs.ESA.2019.53},
  annote =	{Keywords: parameterized approximation, parameterized intractability, independent set of rectangles, geometric knapsack}
}
Document
Track A: Algorithms, Complexity and Games
On Adaptive Algorithms for Maximum Matching

Authors: Falko Hegerfeld and Stefan Kratsch

Published in: LIPIcs, Volume 132, 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)


Abstract
In the fundamental Maximum Matching problem the task is to find a maximum cardinality set of pairwise disjoint edges in a given undirected graph. The fastest algorithm for this problem, due to Micali and Vazirani, runs in time O(sqrt{n}m) and stands unbeaten since 1980. It is complemented by faster, often linear-time, algorithms for various special graph classes. Moreover, there are fast parameterized algorithms, e.g., time O(km log n) relative to tree-width k, which outperform O(sqrt{n}m) when the parameter is sufficiently small. We show that the Micali-Vazirani algorithm, and in fact any algorithm following the phase framework of Hopcroft and Karp, is adaptive to beneficial input structure. We exhibit several graph classes for which such algorithms run in linear time O(n+m). More strongly, we show that they run in time O(sqrt{k}m) for graphs that are k vertex deletions away from any of several such classes, without explicitly computing an optimal or approximate deletion set; before, most such bounds were at least Omega(km). Thus, any phase-based matching algorithm with linear-time phases obliviously interpolates between linear time for k=O(1) and the worst case of O(sqrt{n}m) when k=Theta(n). We complement our findings by proving that the phase framework by itself still allows Omega(sqrt{n}) phases, and hence time Omega(sqrt{n}m), even on paths, cographs, and bipartite chain graphs.

Cite as

Falko Hegerfeld and Stefan Kratsch. On Adaptive Algorithms for Maximum Matching. In 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 132, pp. 71:1-71:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{hegerfeld_et_al:LIPIcs.ICALP.2019.71,
  author =	{Hegerfeld, Falko and Kratsch, Stefan},
  title =	{{On Adaptive Algorithms for Maximum Matching}},
  booktitle =	{46th International Colloquium on Automata, Languages, and Programming (ICALP 2019)},
  pages =	{71:1--71:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-109-2},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{132},
  editor =	{Baier, Christel and Chatzigiannakis, Ioannis and Flocchini, Paola and Leonardi, Stefano},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2019.71},
  URN =		{urn:nbn:de:0030-drops-106477},
  doi =		{10.4230/LIPIcs.ICALP.2019.71},
  annote =	{Keywords: Matchings, Adaptive Analysis, Parameterized Complexity}
}
Document
On Kernelization for Edge Dominating Set under Structural Parameters

Authors: Eva-Maria C. Hols and Stefan Kratsch

Published in: LIPIcs, Volume 126, 36th International Symposium on Theoretical Aspects of Computer Science (STACS 2019)


Abstract
In the NP-hard Edge Dominating Set problem (EDS) we are given a graph G=(V,E) and an integer k, and need to determine whether there is a set F subseteq E of at most k edges that are incident with all (other) edges of G. It is known that this problem is fixed-parameter tractable and admits a polynomial kernelization when parameterized by k. A caveat for this parameter is that it needs to be large, i.e., at least equal to half the size of a maximum matching of G, for instances not to be trivially negative. Motivated by this, we study the existence of polynomial kernelizations for EDS when parameterized by structural parameters that may be much smaller than k. Unfortunately, at first glance this looks rather hopeless: Even when parameterized by the deletion distance to a disjoint union of paths P_3 of length two there is no polynomial kernelization (under standard assumptions), ruling out polynomial kernelizations for many smaller parameters like the feedback vertex set size. In contrast, somewhat surprisingly, there is a polynomial kernelization for deletion distance to a disjoint union of paths P_5 of length four. As our main result, we fully classify for all finite sets H of graphs, whether a kernel size polynomial in |X| is possible when given X such that each connected component of G-X is isomorphic to a graph in H.

Cite as

Eva-Maria C. Hols and Stefan Kratsch. On Kernelization for Edge Dominating Set under Structural Parameters. In 36th International Symposium on Theoretical Aspects of Computer Science (STACS 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 126, pp. 36:1-36:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{hols_et_al:LIPIcs.STACS.2019.36,
  author =	{Hols, Eva-Maria C. and Kratsch, Stefan},
  title =	{{On Kernelization for Edge Dominating Set under Structural Parameters}},
  booktitle =	{36th International Symposium on Theoretical Aspects of Computer Science (STACS 2019)},
  pages =	{36:1--36:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-100-9},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{126},
  editor =	{Niedermeier, Rolf and Paul, Christophe},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2019.36},
  URN =		{urn:nbn:de:0030-drops-102752},
  doi =		{10.4230/LIPIcs.STACS.2019.36},
  annote =	{Keywords: Edge dominating set, kernelization, structural parameters}
}
Document
Multi-Budgeted Directed Cuts

Authors: Stefan Kratsch, Shaohua Li, Dániel Marx, Marcin Pilipczuk, and Magnus Wahlström

Published in: LIPIcs, Volume 115, 13th International Symposium on Parameterized and Exact Computation (IPEC 2018)


Abstract
In this paper, we study multi-budgeted variants of the classic minimum cut problem and graph separation problems that turned out to be important in parameterized complexity: Skew Multicut and Directed Feedback Arc Set. In our generalization, we assign colors 1,2,...,l to some edges and give separate budgets k_1,k_2,...,k_l for colors 1,2,...,l. For every color i in {1,...,l}, let E_i be the set of edges of color i. The solution C for the multi-budgeted variant of a graph separation problem not only needs to satisfy the usual separation requirements (i.e., be a cut, a skew multicut, or a directed feedback arc set, respectively), but also needs to satisfy that |C cap E_i| <= k_i for every i in {1,...,l}. Contrary to the classic minimum cut problem, the multi-budgeted variant turns out to be NP-hard even for l = 2. We propose FPT algorithms parameterized by k = k_1 + ... + k_l for all three problems. To this end, we develop a branching procedure for the multi-budgeted minimum cut problem that measures the progress of the algorithm not by reducing k as usual, but by elevating the capacity of some edges and thus increasing the size of a maximum source-to-sink flow. Using the fact that a similar strategy is used to enumerate all important separators of a given size, we merge this process with the flow-guided branching and show an FPT bound on the number of (appropriately defined) important multi-budgeted separators. This allows us to extend our algorithm to the Skew Multicut and Directed Feedback Arc Set problems. Furthermore, we show connections of the multi-budgeted variants with weighted variants of the directed cut problems and the Chain l-SAT problem, whose parameterized complexity remains an open problem. We show that these problems admit a bounded-in-parameter number of "maximally pushed" solutions (in a similar spirit as important separators are maximally pushed), giving somewhat weak evidence towards their tractability.
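
The per-color budget constraint is straightforward to state in code. The brute-force Python sketch below finds a smallest multi-budgeted s-t cut of a toy digraph by enumeration; the digraph, the coloring, and all names are mine, and the sketch has nothing to do with the FPT branching strategy of the paper.

    from itertools import combinations

    def reachable(arcs, removed, s):
        """Vertices reachable from s after deleting the arcs in `removed`."""
        adj = {}
        for (u, v) in arcs:
            if (u, v) not in removed:
                adj.setdefault(u, set()).add(v)
        seen, stack = {s}, [s]
        while stack:
            u = stack.pop()
            for v in adj.get(u, ()):
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        return seen

    def min_multibudgeted_cut(arcs, color, budgets, s, t):
        """Smallest C with t unreachable from s and |C ∩ E_i| <= k_i for every color i."""
        for size in range(len(arcs) + 1):
            for C in combinations(arcs, size):
                used = {}
                for a in C:
                    used[color[a]] = used.get(color[a], 0) + 1
                if all(used.get(i, 0) <= budgets[i] for i in budgets) \
                        and t not in reachable(arcs, set(C), s):
                    return set(C)
        return None

    arcs = [("s", "a"), ("s", "b"), ("a", "t"), ("b", "t")]
    color = {("s", "a"): 1, ("s", "b"): 1, ("a", "t"): 2, ("b", "t"): 2}
    budgets = {1: 1, 2: 1}      # at most one deleted arc per color
    print(min_multibudgeted_cut(arcs, color, budgets, "s", "t"))
    # e.g. {('s', 'a'), ('b', 't')}: one arc of each color, and t becomes unreachable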

Cite as

Stefan Kratsch, Shaohua Li, Dániel Marx, Marcin Pilipczuk, and Magnus Wahlström. Multi-Budgeted Directed Cuts. In 13th International Symposium on Parameterized and Exact Computation (IPEC 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 115, pp. 18:1-18:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{kratsch_et_al:LIPIcs.IPEC.2018.18,
  author =	{Kratsch, Stefan and Li, Shaohua and Marx, D\'{a}niel and Pilipczuk, Marcin and Wahlstr\"{o}m, Magnus},
  title =	{{Multi-Budgeted Directed Cuts}},
  booktitle =	{13th International Symposium on Parameterized and Exact Computation (IPEC 2018)},
  pages =	{18:1--18:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-084-2},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{115},
  editor =	{Paul, Christophe and Pilipczuk, Michal},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.IPEC.2018.18},
  URN =		{urn:nbn:de:0030-drops-102194},
  doi =		{10.4230/LIPIcs.IPEC.2018.18},
  annote =	{Keywords: important separators, multi-budgeted cuts, Directed Feedback Vertex Set, fixed-parameter tractability, minimum cut}
}
Document
Synergies between Adaptive Analysis of Algorithms, Parameterized Complexity, Compressed Data Structures and Compressed Indices (Dagstuhl Seminar 18281)

Authors: Jérémy Barbay, Johannes Fischer, Stefan Kratsch, and Srinivasa Rao Satti

Published in: Dagstuhl Reports, Volume 8, Issue 7 (2019)


Abstract
From the 8th of July 2018 to the 13th of July 2018, a Dagstuhl Seminar took place with the topic "Synergies between Adaptive Analysis of Algorithms, Parameterized Complexity, Compressed Data Structures and Compressed Indices". There, 40 participants from as many as 14 distinct countries and four distinct research areas, dealing with running time analysis and space usage analysis of algorithms and data structures, gathered to discuss results and techniques to "go beyond the worst-case" for classes of structurally restricted inputs, both for (fast) algorithms and (compressed) data structures. The seminar consisted of (1) a first session of personal introduction, each participant presenting his expertise and themes of interests in two slides; (2) a series of four technical talks; and (3) a larger series of presentations of open problems, with ample time left for the participants to gather and work on such open problems.

Cite as

Jérémy Barbay, Johannes Fischer, Stefan Kratsch, and Srinivasa Rao Satti. Synergies between Adaptive Analysis of Algorithms, Parameterized Complexity, Compressed Data Structures and Compressed Indices (Dagstuhl Seminar 18281). In Dagstuhl Reports, Volume 8, Issue 7, pp. 44-61, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@Article{barbay_et_al:DagRep.8.7.44,
  author =	{Barbay, J\'{e}r\'{e}my and Fischer, Johannes and Kratsch, Stefan and Satti, Srinivasa Rao},
  title =	{{Synergies between Adaptive Analysis of Algorithms, Parameterized Complexity, Compressed Data Structures and Compressed Indices (Dagstuhl Seminar 18281)}},
  pages =	{44--61},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2019},
  volume =	{8},
  number =	{7},
  editor =	{Barbay, J\'{e}r\'{e}my and Fischer, Johannes and Kratsch, Stefan and Satti, Srinivasa Rao},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagRep.8.7.44},
  URN =		{urn:nbn:de:0030-drops-101724},
  doi =		{10.4230/DagRep.8.7.44},
  annote =	{Keywords: adaptive (analysis of) algorithms, compressed data structures, compressed indices, parameterized complexity}
}
Document
Efficient and Adaptive Parameterized Algorithms on Modular Decompositions

Authors: Stefan Kratsch and Florian Nelles

Published in: LIPIcs, Volume 112, 26th Annual European Symposium on Algorithms (ESA 2018)


Abstract
We study the influence of a graph parameter called modular-width on the time complexity of optimally solving well-known polynomial problems such as Maximum Matching, Triangle Counting, and Maximum s-t Vertex-Capacitated Flow. The modular-width of a graph depends on its (unique) modular decomposition tree and can be computed in linear time O(n+m) for graphs with n vertices and m edges. Modular decompositions are an important tool for graph algorithms, e.g., for linear-time recognition of certain graph classes. Throughout, we obtain efficient parameterized algorithms with running times O(f(mw)n + m), O(n + f(mw)m), or O(f(mw) + n + m) for low polynomial functions f and graphs of modular-width mw. Our algorithm for Maximum Matching, running in time O(mw^2 log(mw) n + m), is both faster and simpler than the recent O(mw^4 n + m) time algorithm of Coudert et al. (SODA 2018). For several other problems, e.g., Triangle Counting and Maximum b-Matching, we give adaptive algorithms, meaning that their running times match the best unparameterized algorithms for worst-case modular-width of mw = Theta(n) and they outperform them already for mw = o(n), until reaching linear time for mw = O(1).
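
The building block behind modular-width is the notion of a module: a vertex set M such that every vertex outside M is adjacent either to all of M or to none of it. A minimal Python check on a toy graph (graph and names are mine):

    def is_module(adj, M):
        """True if every vertex outside M sees either all of M or none of M."""
        M = set(M)
        for v in set(adj) - M:
            hits = adj[v] & M
            if hits and hits != M:
                return False
        return True

    # Vertices 0 and 1 have the same neighborhood {2, 3}, so {0, 1} is a module;
    # {1, 2} is not, because vertex 0 sees 2 but not 1.
    adj = {0: {2, 3}, 1: {2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2}}
    print(is_module(adj, {0, 1}))   # True
    print(is_module(adj, {1, 2}))   # False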

Cite as

Stefan Kratsch and Florian Nelles. Efficient and Adaptive Parameterized Algorithms on Modular Decompositions. In 26th Annual European Symposium on Algorithms (ESA 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 112, pp. 55:1-55:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{kratsch_et_al:LIPIcs.ESA.2018.55,
  author =	{Kratsch, Stefan and Nelles, Florian},
  title =	{{Efficient and Adaptive Parameterized Algorithms on Modular Decompositions}},
  booktitle =	{26th Annual European Symposium on Algorithms (ESA 2018)},
  pages =	{55:1--55:15},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-081-1},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{112},
  editor =	{Azar, Yossi and Bast, Hannah and Herman, Grzegorz},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2018.55},
  URN =		{urn:nbn:de:0030-drops-95187},
  doi =		{10.4230/LIPIcs.ESA.2018.55},
  annote =	{Keywords: efficient parameterized algorithms, modular-width, adaptive algorithms}
}
Document
Smaller Parameters for Vertex Cover Kernelization

Authors: Eva-Maria C. Hols and Stefan Kratsch

Published in: LIPIcs, Volume 89, 12th International Symposium on Parameterized and Exact Computation (IPEC 2017)


Abstract
We revisit the topic of polynomial kernels for Vertex Cover relative to structural parameters. Our starting point is a recent paper due to Fomin and Strømme [WG 2016] who gave a kernel with O(|X|^{12}) vertices when X is a vertex set such that each connected component of G-X contains at most one cycle, i.e., X is a modulator to a pseudoforest. We strongly generalize this result by using modulators to d-quasi-forests, i.e., graphs where each connected component has a feedback vertex set of size at most d, and obtain kernels with O(|X|^{3d+9}) vertices. Our result relies on proving that minimal blocking sets in a d-quasi-forest have size at most d+2. This bound is tight and there is a related lower bound of O(|X|^{d+2-epsilon}) on the bit size of kernels. In fact, we also get bounds for minimal blocking sets of more general graph classes: For d-quasi-bipartite graphs, where each connected component can be made bipartite by deleting at most d vertices, we get the same tight bound of d+2 vertices. For graphs whose connected components each have a vertex cover of cost at most d more than the best fractional vertex cover, which we call d-quasi-integral, we show that minimal blocking sets have size at most 2d+2, which is also tight. Combined with existing randomized polynomial kernelizations this leads to randomized polynomial kernelizations for modulators to d-quasi-bipartite and d-quasi-integral graphs. There are lower bounds of O(|X|^{d+2-epsilon}) and O(|X|^{2d+2-epsilon}) for the bit size of such kernels.

Cite as

Eva-Maria C. Hols and Stefan Kratsch. Smaller Parameters for Vertex Cover Kernelization. In 12th International Symposium on Parameterized and Exact Computation (IPEC 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 89, pp. 20:1-20:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


BibTeX

@InProceedings{hols_et_al:LIPIcs.IPEC.2017.20,
  author =	{Hols, Eva-Maria C. and Kratsch, Stefan},
  title =	{{Smaller Parameters for Vertex Cover Kernelization}},
  booktitle =	{12th International Symposium on Parameterized and Exact Computation (IPEC 2017)},
  pages =	{20:1--20:12},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-051-4},
  ISSN =	{1868-8969},
  year =	{2018},
  volume =	{89},
  editor =	{Lokshtanov, Daniel and Nishimura, Naomi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.IPEC.2017.20},
  URN =		{urn:nbn:de:0030-drops-85638},
  doi =		{10.4230/LIPIcs.IPEC.2017.20},
  annote =	{Keywords: Vertex Cover, Kernelization, Structural Parameterization}
}
Document
Revenue Maximization in Stackelberg Pricing Games: Beyond the Combinatorial Setting

Authors: Toni Böhnlein, Stefan Kratsch, and Oliver Schaudt

Published in: LIPIcs, Volume 80, 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)


Abstract
In a Stackelberg Pricing Game a distinguished player, the leader, chooses prices for a set of items, and the other players, the followers, each of whom seeks to buy a minimum-cost feasible subset of the items. The goal of the leader is to maximize her revenue, which is determined by the sold items and their prices. Most previously studied cases of such games can be captured by a combinatorial model where we have a base set of items, some with fixed prices, some priceable, and constraints on the subsets that are feasible for each follower. In this combinatorial setting, Briest et al. and Balcan et al. independently showed that the maximum revenue can be approximated to a factor of H_k ~ log(k), where k is the number of priceable items. Our results are twofold. First, we strongly generalize the model by letting the follower minimize any continuous function plus a linear term over any compact subset of R^n_{>=0}; the coefficients (or prices) in the linear term are chosen by the leader and determine her revenue. In particular, this includes the fundamental case of linear programs. We give a tight lower bound on the revenue of the leader, generalizing the results of Briest et al. and Balcan et al. Besides, we prove that it is strongly NP-hard to decide whether the optimum revenue exceeds the lower bound by an arbitrarily small factor. Second, we study the parameterized complexity of computing the optimal revenue with respect to the number k of priceable items. In the combinatorial setting, given an efficient algorithm for optimal follower solutions, the maximum revenue can be found by enumerating the 2^k subsets of priceable items and computing optimal prices via a result of Briest et al., giving time O(2^k |I|^c), where |I| is the input size. Our main result here is a W[1]-hardness proof for the case where the followers minimize a linear program, ruling out running time f(k)|I|^c unless FPT = W[1] and ruling out time |I|^o(k) under the Exponential-Time Hypothesis.
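
A toy instance of the combinatorial model makes the leader-follower interaction concrete. In the Python sketch below the follower chooses a minimum-cost feasible bundle, with ties broken in the leader's favor (a standard assumption in Stackelberg pricing); the bundles, costs, and names are mine and only illustrate the model, not the results of the paper.

    def follower_best_response(bundles, fixed_cost, prices):
        """The follower buys a cheapest feasible bundle; ties are broken in the
        leader's favor (a standard assumption in Stackelberg pricing)."""
        def cost(S):
            return sum(fixed_cost.get(i, 0) + prices.get(i, 0) for i in S)
        def revenue(S):
            return sum(prices.get(i, 0) for i in S)
        return min(bundles, key=lambda S: (cost(S), -revenue(S)))

    # One feasible bundle contains the single priceable item x (no fixed cost);
    # the alternative consists of fixed-price items with total cost 5.
    bundles = [frozenset({"x"}), frozenset({"a", "b"})]
    fixed_cost = {"a": 2, "b": 3}
    for p in [3, 5, 6]:
        chosen = follower_best_response(bundles, fixed_cost, {"x": p})
        print(p, sorted(chosen), p if "x" in chosen else 0)
    # pricing x at 5 is optimal here: the follower still buys it and the revenue is 5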

Cite as

Toni Böhnlein, Stefan Kratsch, and Oliver Schaudt. Revenue Maximization in Stackelberg Pricing Games: Beyond the Combinatorial Setting. In 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 80, pp. 46:1-46:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{bohnlein_et_al:LIPIcs.ICALP.2017.46,
  author =	{B\"{o}hnlein, Toni and Kratsch, Stefan and Schaudt, Oliver},
  title =	{{Revenue Maximization in Stackelberg Pricing Games: Beyond the Combinatorial Setting}},
  booktitle =	{44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =	{46:1--46:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-041-5},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{80},
  editor =	{Chatzigiannakis, Ioannis and Indyk, Piotr and Kuhn, Fabian and Muscholl, Anca},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2017.46},
  URN =		{urn:nbn:de:0030-drops-73771},
  doi =		{10.4230/LIPIcs.ICALP.2017.46},
  annote =	{Keywords: Algorithmic pricing, Stackelberg games, Approximation algorithms, Revenue maximization, Parameterized complexity}
}
Document
The Parameterized Complexity of Finding a 2-Sphere in a Simplicial Complex

Authors: Benjamin Burton, Sergio Cabello, Stefan Kratsch, and William Pettersson

Published in: LIPIcs, Volume 66, 34th Symposium on Theoretical Aspects of Computer Science (STACS 2017)


Abstract
We consider the problem of finding a subcomplex K' of a simplicial complex K such that K' is homeomorphic to the 2-dimensional sphere, S^2. We study two variants of this problem. The first asks if there exists such a K' with at most k triangles, and we show that this variant is W[1]-hard and, assuming ETH, admits no O(n^(o(sqrt(k)))) time algorithm. We also give an algorithm that is tight with regards to this lower bound. The second problem is the dual of the first, and asks if K' can be found by removing at most k triangles from K. This variant has an immediate O(3^k poly(|K|)) time algorithm, and we show that it admits a polynomial kernelization to O(k^2) triangles, as well as a polynomial compression to a weighted version with bit-size O(k log k).

Cite as

Benjamin Burton, Sergio Cabello, Stefan Kratsch, and William Pettersson. The Parameterized Complexity of Finding a 2-Sphere in a Simplicial Complex. In 34th Symposium on Theoretical Aspects of Computer Science (STACS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 66, pp. 18:1-18:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{burton_et_al:LIPIcs.STACS.2017.18,
  author =	{Burton, Benjamin and Cabello, Sergio and Kratsch, Stefan and Pettersson, William},
  title =	{{The Parameterized Complexity of Finding a 2-Sphere in a Simplicial Complex}},
  booktitle =	{34th Symposium on Theoretical Aspects of Computer Science (STACS 2017)},
  pages =	{18:1--18:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-028-6},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{66},
  editor =	{Vollmer, Heribert and Vall\'{e}e, Brigitte},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2017.18},
  URN =		{urn:nbn:de:0030-drops-70156},
  doi =		{10.4230/LIPIcs.STACS.2017.18},
  annote =	{Keywords: computational topology, parameterized complexity, simplicial complex}
}
Document
Robust and Adaptive Search

Authors: Yann Disser and Stefan Kratsch

Published in: LIPIcs, Volume 66, 34th Symposium on Theoretical Aspects of Computer Science (STACS 2017)


Abstract
Binary search finds a given element in a sorted array with an optimal number of log n queries. However, binary search fails even when the array is only slightly disordered or access to its elements is subject to errors. We study the worst-case query complexity of search algorithms that are robust to imprecise queries and that adapt to perturbations of the order of the elements. We give (almost) tight results for various parameters that quantify query errors and that measure array disorder. In particular, we exhibit settings where query complexities of log n + ck, (1+epsilon) log n + ck, and sqrt(cnk)+o(nk) are best-possible for parameter value k, any epsilon > 0, and constant c.
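
The fragility that motivates this work is easy to reproduce: textbook binary search may miss an element that is present once the array is even slightly disordered. A tiny self-contained Python demonstration (the example arrays are mine):

    def binary_search(a, x):
        """Textbook binary search; only correct on sorted arrays."""
        lo, hi = 0, len(a) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if a[mid] == x:
                return mid
            if a[mid] < x:
                lo = mid + 1
            else:
                hi = mid - 1
        return -1

    sorted_a   = [1, 3, 5, 7, 9, 11, 13]
    disordered = [1, 3, 13, 7, 9, 11, 5]   # two elements swapped
    print(binary_search(sorted_a, 5))      # 2: found
    print(binary_search(disordered, 5))    # -1: missed, although 5 is present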

Cite as

Yann Disser and Stefan Kratsch. Robust and Adaptive Search. In 34th Symposium on Theoretical Aspects of Computer Science (STACS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 66, pp. 26:1-26:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)


BibTeX

@InProceedings{disser_et_al:LIPIcs.STACS.2017.26,
  author =	{Disser, Yann and Kratsch, Stefan},
  title =	{{Robust and Adaptive Search}},
  booktitle =	{34th Symposium on Theoretical Aspects of Computer Science (STACS 2017)},
  pages =	{26:1--26:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-028-6},
  ISSN =	{1868-8969},
  year =	{2017},
  volume =	{66},
  editor =	{Vollmer, Heribert and Vall\'{e}e, Brigitte},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2017.26},
  URN =		{urn:nbn:de:0030-drops-70077},
  doi =		{10.4230/LIPIcs.STACS.2017.26},
  annote =	{Keywords: searching, robustness, adaptive algorithms, memory faults, array disorder}
}
Document
Preprocessing Under Uncertainty: Matroid Intersection

Authors: Stefan Fafianie, Eva-Maria C. Hols, Stefan Kratsch, and Vuong Anh Quyen

Published in: LIPIcs, Volume 58, 41st International Symposium on Mathematical Foundations of Computer Science (MFCS 2016)


Abstract
We continue the study of preprocessing under uncertainty that was initiated independently by Assadi et al. (FSTTCS 2015) and Fafianie et al. (STACS 2016). Here, we are given an instance of a tractable problem with a large static/known part and a small part that is dynamic/uncertain, and ask if there is an efficient algorithm that computes an instance of size polynomial in the uncertain part of the input, from which we can extract an optimal solution to the original instance for all (usually exponentially many) instantiations of the uncertain part. In the present work, we focus on the Matroid Intersection problem. Among other results, we present a positive preprocessing result for the important case of finding a largest common independent set in two linear matroids. Motivated by an application for intersecting two gammoids, we also revisit Maximum Flow. There we tighten a lower bound of Assadi et al. and give an alternative positive result for the case of low uncertain capacity that yields a Maximum Flow instance as output rather than a matrix.

Cite as

Stefan Fafianie, Eva-Maria C. Hols, Stefan Kratsch, and Vuong Anh Quyen. Preprocessing Under Uncertainty: Matroid Intersection. In 41st International Symposium on Mathematical Foundations of Computer Science (MFCS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 58, pp. 35:1-35:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{fafianie_et_al:LIPIcs.MFCS.2016.35,
  author =	{Fafianie, Stefan and Hols, Eva-Maria C. and Kratsch, Stefan and Quyen, Vuong Anh},
  title =	{{Preprocessing Under Uncertainty: Matroid Intersection}},
  booktitle =	{41st International Symposium on Mathematical Foundations of Computer Science (MFCS 2016)},
  pages =	{35:1--35:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-016-3},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{58},
  editor =	{Faliszewski, Piotr and Muscholl, Anca and Niedermeier, Rolf},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.MFCS.2016.35},
  URN =		{urn:nbn:de:0030-drops-64490},
  doi =		{10.4230/LIPIcs.MFCS.2016.35},
  annote =	{Keywords: preprocessing, uncertainty, maximum flow, matroid intersection}
}
Document
A Randomized Polynomial Kernelization for Vertex Cover with a Smaller Parameter

Authors: Stefan Kratsch

Published in: LIPIcs, Volume 57, 24th Annual European Symposium on Algorithms (ESA 2016)


Abstract
In the Vertex Cover problem we are given a graph G=(V,E) and an integer k and have to determine whether there is a set X subseteq V of size at most k such that each edge in E has at least one endpoint in X. The problem can be easily solved in time O^*(2^k), making it fixed-parameter tractable (FPT) with respect to k. While the fastest known algorithm takes only time O^*(1.2738^k), much stronger improvements have been obtained by studying parameters that are smaller than k. Apart from treewidth-related results, the arguably best algorithm for Vertex Cover runs in time O^*(2.3146^p), where p = k - LP(G) is only the excess of the solution size k over the best fractional vertex cover [Lokshtanov et al., TALG 2014]. Since p <= k but k cannot be bounded in terms of p alone, this strictly increases the range of tractable instances. Recently, Garg and Philip (SODA 2016) greatly contributed to understanding the parameterized complexity of the Vertex Cover problem. They prove that 2LP(G) - MM(G) is a lower bound for the vertex cover size of G, where MM(G) is the size of a largest matching of G, and proceed to study parameter l = k - (2LP(G)-MM(G)). They give an algorithm of running time O^*(3^l), proving that Vertex Cover is FPT in l. It can be easily observed that l <= p whereas p cannot be bounded in terms of l alone. We complement the work of Garg and Philip by proving that Vertex Cover admits a randomized polynomial kernelization in terms of l, i.e., an efficient preprocessing to size polynomial in l. This improves over parameter p = k - LP(G) for which this was previously known [Kratsch and Wahlström, FOCS 2012].
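To make the relation between the two parameters concrete, here is a minimal, hedged sketch that only evaluates p = k - LP(G) and l = k - (2 LP(G) - MM(G)) from given values LP(G) and MM(G); computing LP(G) (half-integral LP relaxation) and MM(G) (maximum matching) is left to standard solvers and is not shown.

# Minimal sketch (Python): the two below-LP parameters discussed above,
# evaluated from precomputed values LP(G) and MM(G). Computing LP(G) and
# MM(G) themselves (LP relaxation / maximum matching) is not shown here.
def below_lp_parameters(k, lp, mm):
    p = k - lp                 # parameter of Lokshtanov et al.
    l = k - (2 * lp - mm)      # parameter of Garg and Philip
    return p, l

# Example: k = 5, LP(G) = 3.5, MM(G) = 3  =>  p = 1.5, l = 1 (so l <= p).
p, l = below_lp_parameters(5, 3.5, 3)
assert (p, l) == (1.5, 1)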

Cite as

Stefan Kratsch. A Randomized Polynomial Kernelization for Vertex Cover with a Smaller Parameter. In 24th Annual European Symposium on Algorithms (ESA 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 57, pp. 59:1-59:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{kratsch:LIPIcs.ESA.2016.59,
  author =	{Kratsch, Stefan},
  title =	{{A Randomized Polynomial Kernelization for Vertex Cover with a Smaller Parameter}},
  booktitle =	{24th Annual European Symposium on Algorithms (ESA 2016)},
  pages =	{59:1--59:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-015-6},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{57},
  editor =	{Sankowski, Piotr and Zaroliagis, Christos},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2016.59},
  URN =		{urn:nbn:de:0030-drops-64066},
  doi =		{10.4230/LIPIcs.ESA.2016.59},
  annote =	{Keywords: Vertex cover, parameterized complexity, kernelization}
}
Document
Preprocessing Under Uncertainty

Authors: Stefan Fafianie, Stefan Kratsch, and Vuong Anh Quyen

Published in: LIPIcs, Volume 47, 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)


Abstract
In this work we study preprocessing for tractable problems when part of the input is unknown or uncertain. This comes up naturally if, e.g., the load of some machines or the congestion of some roads is not known far enough in advance, or if we have to regularly solve a problem over instances that are largely similar, e.g., daily airport scheduling with few charter flights. Unlike robust optimization, which also studies settings like this, our goal lies not in computing solutions that are (approximately) good for every instantiation. Rather, we seek to preprocess the known parts of the input, to speed up finding an optimal solution once the missing data is known. We present efficient algorithms that given an instance with partially uncertain input generate an instance of size polynomial in the amount of uncertain data that is equivalent for every instantiation of the unknown part. Concretely, we obtain such algorithms for minimum spanning tree, minimum weight matroid basis, and maximum cardinality bipartite matching, where respectively the weight of edges, weight of elements, and the availability of vertices is unknown for part of the input. Furthermore, we show that there are tractable problems, such as small connected vertex cover, for which one cannot hope to obtain similar results.

Cite as

Stefan Fafianie, Stefan Kratsch, and Vuong Anh Quyen. Preprocessing Under Uncertainty. In 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 47, pp. 33:1-33:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{fafianie_et_al:LIPIcs.STACS.2016.33,
  author =	{Fafianie, Stefan and Kratsch, Stefan and Anh Quyen, Vuong},
  title =	{{Preprocessing Under Uncertainty}},
  booktitle =	{33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)},
  pages =	{33:1--33:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-001-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{47},
  editor =	{Ollinger, Nicolas and Vollmer, Heribert},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2016.33},
  URN =		{urn:nbn:de:0030-drops-57340},
  doi =		{10.4230/LIPIcs.STACS.2016.33},
  annote =	{Keywords: preprocessing, uncertainty, spanning trees, matroids, matchings}
}
Document
A Randomized Polynomial Kernel for Subset Feedback Vertex Set

Authors: Eva-Maria C. Hols and Stefan Kratsch

Published in: LIPIcs, Volume 47, 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)


Abstract
The SUBSET FEEDBACK VERTEX SET problem generalizes the classical FEEDBACK VERTEX SET problem and asks, for a given undirected graph G=(V,E), a set S subseteq V, and an integer k, whether there exists a set X of at most k vertices such that no cycle in G-X contains a vertex of S. It was independently shown by Cygan et al. (ICALP'11, SIDMA'13) and Kawarabayashi and Kobayashi (JCTB'12) that SUBSET FEEDBACK VERTEX SET is fixed-parameter tractable for parameter k. Cygan et al. asked whether the problem also admits a polynomial kernelization. We answer the question of Cygan et al. positively by giving a randomized polynomial kernelization for the equivalent version where S is a set of edges. In a first step we show that EDGE SUBSET FEEDBACK VERTEX SET has a randomized polynomial kernel parameterized by |S|+k with O(|S|^2 k) vertices. For this we use the matroid-based tools of Kratsch and Wahlström (FOCS'12). Next we present a preprocessing that reduces the given instance (G,S,k) to an equivalent instance (G',S',k') where the size of S' is bounded by O(k^4). These two results lead to a polynomial kernel for SUBSET FEEDBACK VERTEX SET with O(k^9) vertices.
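As a small illustration of the solution concept (not of the kernelization itself), the following hedged sketch verifies a candidate solution X: in G - X, a vertex s of S lies on a cycle if and only if two of its remaining neighbours are still connected once s is also removed. The adjacency-set representation is our own choice.

# Minimal sketch (Python): verify a candidate solution X for Subset FVS.
# In G - X, a vertex s lies on a cycle iff two of its neighbours remain
# connected after additionally deleting s. Graphs are dicts of adjacency sets.
from collections import deque

def reachable(adj, start, banned):
    seen, queue = {start}, deque([start])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in banned and w not in seen:
                seen.add(w)
                queue.append(w)
    return seen

def is_subset_fvs(adj, S, X):
    for s in S:
        if s in X:
            continue
        nbrs = [v for v in adj[s] if v not in X]
        for i, u in enumerate(nbrs):
            comp = reachable(adj, u, X | {s})
            if any(w in comp for w in nbrs[i + 1:]):
                return False               # a cycle through s survives in G - X
    return True

# Triangle 0-1-2 plus pendant vertex 3; S = {0}. Deleting vertex 1 works.
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0}}
assert not is_subset_fvs(adj, {0}, set())
assert is_subset_fvs(adj, {0}, {1})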

Cite as

Eva-Maria C. Hols and Stefan Kratsch. A Randomized Polynomial Kernel for Subset Feedback Vertex Set. In 33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016). Leibniz International Proceedings in Informatics (LIPIcs), Volume 47, pp. 43:1-43:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


Copy BibTex To Clipboard

@InProceedings{hols_et_al:LIPIcs.STACS.2016.43,
  author =	{Hols, Eva-Maria C. and Kratsch, Stefan},
  title =	{{A Randomized Polynomial Kernel for Subset Feedback Vertex Set}},
  booktitle =	{33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)},
  pages =	{43:1--43:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-001-9},
  ISSN =	{1868-8969},
  year =	{2016},
  volume =	{47},
  editor =	{Ollinger, Nicolas and Vollmer, Heribert},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2016.43},
  URN =		{urn:nbn:de:0030-drops-57448},
  doi =		{10.4230/LIPIcs.STACS.2016.43},
  annote =	{Keywords: parameterized complexity, kernelization, subset feedback vertex set}
}
Document
The Parameterized Complexity of the Minimum Shared Edges Problem

Authors: Till Fluschnik, Stefan Kratsch, Rolf Niedermeier, and Manuel Sorge

Published in: LIPIcs, Volume 45, 35th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2015)


Abstract
We study the NP-complete Minimum Shared Edges (MSE) problem. Given an undirected graph, a source and a sink vertex, and two integers p and k, the question is whether there are p paths in the graph connecting the source with the sink and sharing at most k edges. Herein, an edge is shared if it appears in at least two paths. We show that MSE is W[1]-hard when parameterized by the treewidth of the input graph and the number k of shared edges combined. We show that MSE is fixed-parameter tractable with respect to p, but does not admit a polynomial-size kernel (unless NP is a subset of coNP/poly). In the proof of the fixed-parameter tractability of MSE parameterized by p, we employ the treewidth reduction technique due to Marx, O'Sullivan, and Razgon [ACM TALG 2013].
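For intuition about the objective, the hedged sketch below merely counts shared edges for a given collection of source-sink paths (an edge counts as shared once it appears in at least two paths); representing paths as vertex sequences is our own assumption.

# Minimal sketch (Python): count shared edges for a given set of s-t paths.
# Paths are vertex sequences; an (undirected) edge is shared if it occurs
# in at least two of the paths.
from collections import Counter

def shared_edges(paths):
    usage = Counter()
    for path in paths:
        # count each edge at most once per path
        edges = {frozenset(e) for e in zip(path, path[1:])}
        usage.update(edges)
    return sum(1 for count in usage.values() if count >= 2)

# Two s-t paths that overlap on the single edge (1, 2).
p1 = [0, 1, 2, 3]
p2 = [0, 4, 1, 2, 5, 3]
assert shared_edges([p1, p2]) == 1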

Cite as

Till Fluschnik, Stefan Kratsch, Rolf Niedermeier, and Manuel Sorge. The Parameterized Complexity of the Minimum Shared Edges Problem. In 35th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 45, pp. 448-462, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


Copy BibTex To Clipboard

@InProceedings{fluschnik_et_al:LIPIcs.FSTTCS.2015.448,
  author =	{Fluschnik, Till and Kratsch, Stefan and Niedermeier, Rolf and Sorge, Manuel},
  title =	{{The Parameterized Complexity of the Minimum Shared Edges Problem}},
  booktitle =	{35th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2015)},
  pages =	{448--462},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-97-2},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{45},
  editor =	{Harsha, Prahladh and Ramalingam, G.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2015.448},
  URN =		{urn:nbn:de:0030-drops-56323},
  doi =		{10.4230/LIPIcs.FSTTCS.2015.448},
  annote =	{Keywords: Parameterized complexity, kernelization, treewidth, treewidth reduction}
}
Document
On Kernelization and Approximation for the Vector Connectivity Problem

Authors: Stefan Kratsch and Manuel Sorge

Published in: LIPIcs, Volume 43, 10th International Symposium on Parameterized and Exact Computation (IPEC 2015)


Abstract
In the Vector Connectivity problem we are given an undirected graph G=(V,E), a demand function phi: V -> {0,...,d}, and an integer k. The question is whether there exists a set S of at most k vertices such that every vertex v in V\S has at least phi(v) vertex-disjoint paths to S; this abstractly captures questions about placing servers in a network, or warehouses on a map, relative to demands. The problem is NP-hard already for instances with d=4 (Cicalese et al., Theor. Comput. Sci. 2015), admits a log-factor approximation (Boros et al., Networks 2014), and is fixed-parameter tractable in terms of k (Lokshtanov, unpublished 2014). We prove several results regarding kernelization and approximation for Vector Connectivity and the variant Vector d-Connectivity where the upper bound d on demands is a constant. For Vector d-Connectivity we give a factor d-approximation algorithm and construct a vertex-linear kernelization, i.e., an efficient reduction to an equivalent instance with f(d)*k = O(k) vertices. For Vector Connectivity we get a factor opt-approximation and we show that it has no kernelization to size polynomial in k+d unless NP subseteq coNP/poly, making f(d)*poly(k) optimal for Vector d-Connectivity. Finally, we provide a write-up for fixed-parameter tractability of Vector Connectivity(k) by giving a different algorithm based on matroid intersection.

Cite as

Stefan Kratsch and Manuel Sorge. On Kernelization and Approximation for the Vector Connectivity Problem. In 10th International Symposium on Parameterized and Exact Computation (IPEC 2015). Leibniz International Proceedings in Informatics (LIPIcs), Volume 43, pp. 377-388, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


Copy BibTex To Clipboard

@InProceedings{kratsch_et_al:LIPIcs.IPEC.2015.377,
  author =	{Kratsch, Stefan and Sorge, Manuel},
  title =	{{On Kernelization and Approximation for the Vector Connectivity Problem}},
  booktitle =	{10th International Symposium on Parameterized and Exact Computation (IPEC 2015)},
  pages =	{377--388},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-92-7},
  ISSN =	{1868-8969},
  year =	{2015},
  volume =	{43},
  editor =	{Husfeldt, Thore and Kanj, Iyad},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.IPEC.2015.377},
  URN =		{urn:nbn:de:0030-drops-55985},
  doi =		{10.4230/LIPIcs.IPEC.2015.377},
  annote =	{Keywords: parameterized complexity, kernelization, approximation}
}
Document
Optimality and tight results in parameterized complexity (Dagstuhl Seminar 14451)

Authors: Stefan Kratsch, Daniel Lokshtanov, Dániel Marx, and Peter Rossmanith

Published in: Dagstuhl Reports, Volume 4, Issue 11 (2015)


Abstract
This report documents the program and the outcomes of Dagstuhl Seminar 14451 "Optimality and tight results in parameterized complexity". Over the last two decades parameterized complexity has become one of the main tools for handling intractable problems. Recently, tools have been developed not only to classify problems, but also to make statements about how close an algorithm is to being optimal with respect to running time. The focus of this seminar is to highlight and discuss recent, relevant results within this optimality framework and discover fruitful research directions. The report contains the abstracts of the results presented at the seminar, as well as a collection of open problems stated at the seminar.

Cite as

Stefan Kratsch, Daniel Lokshtanov, Dániel Marx, and Peter Rossmanith. Optimality and tight results in parameterized complexity (Dagstuhl Seminar 14451). In Dagstuhl Reports, Volume 4, Issue 11, pp. 1-21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2015)


Copy BibTex To Clipboard

@Article{kratsch_et_al:DagRep.4.11.1,
  author =	{Kratsch, Stefan and Lokshtanov, Daniel and Marx, D\'{a}niel and Rossmanith, Peter},
  title =	{{Optimality and tight results in parameterized complexity (Dagstuhl Seminar 14451)}},
  pages =	{1--21},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2015},
  volume =	{4},
  number =	{11},
  editor =	{Kratsch, Stefan and Lokshtanov, Daniel and Marx, D\'{a}niel and Rossmanith, Peter},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/DagRep.4.11.1},
  URN =		{urn:nbn:de:0030-drops-49677},
  doi =		{10.4230/DagRep.4.11.1},
  annote =	{Keywords: Algorithms, parameterized complexity, kernels, width measures, exponential time hypothesis, lower bounds}
}
Document
Tight bounds for Parameterized Complexity of Cluster Editing

Authors: Fedor V. Fomin, Stefan Kratsch, Marcin Pilipczuk, Michal Pilipczuk, and Yngve Villanger

Published in: LIPIcs, Volume 20, 30th International Symposium on Theoretical Aspects of Computer Science (STACS 2013)


Abstract
In the Correlation Clustering problem, also known as Cluster Editing, we are given an undirected graph G and a positive integer k; the task is to decide whether G can be transformed into a cluster graph, i.e., a disjoint union of cliques, by changing at most k adjacencies, that is, by adding or deleting at most k edges. The motivation of the problem stems from various tasks in computational biology (Ben-Dor et al., Journal of Computational Biology 1999) and machine learning (Bansal et al., Machine Learning 2004). Although in general Correlation Clustering is APX-hard (Charikar et al., FOCS 2003), the version of the problem where the number of cliques may not exceed a prescribed constant p admits a PTAS (Giotis and Guruswami, SODA 2006). We study the parameterized complexity of Correlation Clustering with this restriction on the number of cliques to be created. We give an algorithm that, in time O(2^{O(sqrt{pk})} + n + m), decides whether a graph G on n vertices and m edges can be transformed into a cluster graph with exactly p cliques by changing at most k adjacencies. We complement these algorithmic findings by the following, surprisingly tight lower bound on the asymptotic behavior of our algorithm. We show that, unless the Exponential Time Hypothesis (ETH) fails, for any constant 0 <= sigma <= 1 there is p = Theta(k^sigma) such that there is no algorithm deciding in time 2^{o(sqrt{pk})} n^{O(1)} whether an n-vertex graph G can be transformed into a cluster graph with at most p cliques by changing at most k adjacencies. Thus, our upper and lower bounds provide an asymptotically tight analysis of the multivariate parameterized complexity of the problem for the whole range of values of p from constant to a linear function of k.
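To fix terminology, a cluster graph is exactly a graph in which every connected component is a clique, or equivalently a graph with no induced path on three vertices. The sketch below is a hedged, brute-force recogniser of this target structure, not the paper's subexponential algorithm.

# Minimal sketch (Python): recognise a cluster graph, i.e., a disjoint union
# of cliques. Equivalently, the graph contains no induced P3. Graphs are
# dicts of adjacency sets (simple, undirected).
def is_cluster_graph(adj):
    for u in adj:
        for v in adj[u]:
            for w in adj[v]:
                if w != u and w not in adj[u]:
                    return False   # u - v - w is an induced P3
    return True

# Two disjoint triangles form a cluster graph; a path on three vertices does not.
two_triangles = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}, 3: {4, 5}, 4: {3, 5}, 5: {3, 4}}
path_p3 = {0: {1}, 1: {0, 2}, 2: {1}}
assert is_cluster_graph(two_triangles)
assert not is_cluster_graph(path_p3)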

Cite as

Fedor V. Fomin, Stefan Kratsch, Marcin Pilipczuk, Michal Pilipczuk, and Yngve Villanger. Tight bounds for Parameterized Complexity of Cluster Editing. In 30th International Symposium on Theoretical Aspects of Computer Science (STACS 2013). Leibniz International Proceedings in Informatics (LIPIcs), Volume 20, pp. 32-43, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2013)


Copy BibTex To Clipboard

@InProceedings{fomin_et_al:LIPIcs.STACS.2013.32,
  author =	{Fomin, Fedor V. and Kratsch, Stefan and Pilipczuk, Marcin and Pilipczuk, Michal and Villanger, Yngve},
  title =	{{Tight bounds for Parameterized Complexity of Cluster Editing}},
  booktitle =	{30th International Symposium on Theoretical Aspects of Computer Science (STACS 2013)},
  pages =	{32--43},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-50-7},
  ISSN =	{1868-8969},
  year =	{2013},
  volume =	{20},
  editor =	{Portier, Natacha and Wilke, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2013.32},
  URN =		{urn:nbn:de:0030-drops-39209},
  doi =		{10.4230/LIPIcs.STACS.2013.32},
  annote =	{Keywords: parameterized complexity, cluster editing, correlation clustering, subexponential algorithms, tight bounds}
}
Document
On Polynomial Kernels for Sparse Integer Linear Programs

Authors: Stefan Kratsch

Published in: LIPIcs, Volume 20, 30th International Symposium on Theoretical Aspects of Computer Science (STACS 2013)


Abstract
Integer linear programs (ILPs) are a widely applied framework for dealing with combinatorial problems that arise in practice. It is known, e.g., by the success of CPLEX, that preprocessing and simplification can greatly speed up the process of optimizing an ILP. The present work seeks to further the theoretical understanding of preprocessing for ILPs by initiating a rigorous study within the framework of parameterized complexity and kernelization. A famous result of Lenstra (Mathematics of Operations Research, 1983) shows that feasibility of any ILP with n variables and m constraints can be decided in time O(c^{n^3} m^{c'}). Thus, by a folklore argument, any such ILP admits a kernelization to an equivalent instance of size O(c^{n^3}). It is known that, unless NP subseteq coNP/poly and the polynomial hierarchy collapses, no kernelization with size bound polynomial in n is possible. However, this lower bound only applies for the case when constraints may include an arbitrary number of variables, since it follows from lower bounds for SAT and Hitting Set, whose bounded-arity variants admit polynomial kernelizations. We consider the feasibility problem for ILPs Ax <= b where A is an r-row-sparse matrix, parameterized by the number of variables. We show that the kernelizability of this problem depends strongly on the range of the variables. If the range is unbounded then this problem does not admit a polynomial kernelization unless NP subseteq coNP/poly. If, on the other hand, the range of each variable is polynomially bounded in n then we do get a polynomial kernelization. Additionally, this holds also for the more general case when the maximum range d is an additional parameter, i.e., the size obtained is polynomial in n+d.
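To illustrate why a bound d on the variable range matters at all, note that feasibility of Ax <= b over {0,...,d}^n can at least be brute-forced over the (d+1)^n candidate assignments. The hedged sketch below does exactly that for tiny instances; it is meant only to make the role of the range explicit and does not reflect the paper's kernelization.

# Minimal sketch (Python): brute-force feasibility of A x <= b with each
# variable ranging over {0, ..., d}. The search space has (d + 1)^n points,
# which is why the range bound d enters the picture.
from itertools import product

def ilp_feasible(A, b, d):
    n = len(A[0])
    for x in product(range(d + 1), repeat=n):
        if all(sum(a_ij * x_j for a_ij, x_j in zip(row, x)) <= b_i
               for row, b_i in zip(A, b)):
            return x
    return None

# x1 + x2 <= 3 and -x1 <= -2 (i.e., x1 >= 2), variables in {0, 1, 2, 3}.
A = [[1, 1], [-1, 0]]
b = [3, -2]
print(ilp_feasible(A, b, 3))   # e.g. (2, 0)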

Cite as

Stefan Kratsch. On Polynomial Kernels for Sparse Integer Linear Programs. In 30th International Symposium on Theoretical Aspects of Computer Science (STACS 2013). Leibniz International Proceedings in Informatics (LIPIcs), Volume 20, pp. 80-91, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2013)


Copy BibTex To Clipboard

@InProceedings{kratsch:LIPIcs.STACS.2013.80,
  author =	{Kratsch, Stefan},
  title =	{{On Polynomial Kernels for Sparse Integer Linear Programs}},
  booktitle =	{30th International Symposium on Theoretical Aspects of Computer Science (STACS 2013)},
  pages =	{80--91},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-50-7},
  ISSN =	{1868-8969},
  year =	{2013},
  volume =	{20},
  editor =	{Portier, Natacha and Wilke, Thomas},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2013.80},
  URN =		{urn:nbn:de:0030-drops-39241},
  doi =		{10.4230/LIPIcs.STACS.2013.80},
  annote =	{Keywords: integer linear programs, kernelization, parameterized complexity}
}
Document
Cross-Composition: A New Technique for Kernelization Lower Bounds

Authors: Hans L. Bodlaender, Bart M. P. Jansen, and Stefan Kratsch

Published in: LIPIcs, Volume 9, 28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011)


Abstract
We introduce a new technique for proving kernelization lower bounds, called cross-composition. A classical problem L cross-composes into a parameterized problem Q if an instance of Q with polynomially bounded parameter value can express the logical OR of a sequence of instances of L. Building on work by Bodlaender et al. (ICALP 2008) and using a result by Fortnow and Santhanam (STOC 2008) we show that if an NP-hard problem cross-composes into a parameterized problem Q then Q does not admit a polynomial kernel unless the polynomial hierarchy collapses. Our technique generalizes and strengthens the recent techniques of using OR-composition algorithms and of transferring the lower bounds via polynomial parameter transformations. We show its applicability by proving kernelization lower bounds for a number of important graph problems with structural (non-standard) parameterizations, e.g., Chromatic Number, Clique, and Weighted Feedback Vertex Set do not admit polynomial kernels with respect to the vertex cover number of the input graphs unless the polynomial hierarchy collapses, contrasting the fact that these problems are trivially fixed-parameter tractable for this parameter. We have similar lower bounds for Feedback Vertex Set.
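The "logical OR" at the heart of the definition is easiest to see in the classical special case that cross-composition generalizes: for a problem such as Longest Path, the disjoint union of many input graphs is a yes-instance if and only if at least one of the inputs is. The hedged sketch below only builds such a disjoint union; the extra requirement of cross-composition, that the parameter of the composed instance stays polynomially bounded, is not modelled here.

# Minimal sketch (Python): the "logical OR" idea behind (cross-)composition,
# in the classic special case of disjoint union. For a problem like Longest
# Path with parameter k, the union has a path of length k iff some input does.
# Assumes each input graph has vertices labelled 0..n-1 (dicts of adjacency sets).
def disjoint_union(graphs):
    union, offset = {}, 0
    for adj in graphs:
        relabel = {v: v + offset for v in adj}
        for v, nbrs in adj.items():
            union[relabel[v]] = {relabel[w] for w in nbrs}
        offset += len(adj)
    return union

g1 = {0: {1}, 1: {0}}                        # single edge
g2 = {0: {1}, 1: {0, 2}, 2: {1}}             # path on three vertices
print(disjoint_union([g1, g2]))              # 5 vertices, two components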

Cite as

Hans L. Bodlaender, Bart M. P. Jansen, and Stefan Kratsch. Cross-Composition: A New Technique for Kernelization Lower Bounds. In 28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011). Leibniz International Proceedings in Informatics (LIPIcs), Volume 9, pp. 165-176, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


Copy BibTex To Clipboard

@InProceedings{bodlaender_et_al:LIPIcs.STACS.2011.165,
  author =	{Bodlaender, Hans L. and Jansen, Bart M. P. and Kratsch, Stefan},
  title =	{{Cross-Composition: A New Technique for Kernelization Lower Bounds}},
  booktitle =	{28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011)},
  pages =	{165--176},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-25-5},
  ISSN =	{1868-8969},
  year =	{2011},
  volume =	{9},
  editor =	{Schwentick, Thomas and D\"{u}rr, Christoph},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2011.165},
  URN =		{urn:nbn:de:0030-drops-30082},
  doi =		{10.4230/LIPIcs.STACS.2011.165},
  annote =	{Keywords: kernelization, lower bounds, parameterized complexity}
}
Document
Polynomial Kernelizations for MIN F^+Pi_1 and MAX NP

Authors: Stefan Kratsch

Published in: LIPIcs, Volume 3, 26th International Symposium on Theoretical Aspects of Computer Science (2009)


Abstract
The relation of constant-factor approximability to fixed-parameter tractability and kernelization is a long-standing open question. We prove that two large classes of constant-factor approximable problems, namely MIN F^+Pi_1 and MAX NP, including the well-known subclass MAX SNP, admit polynomial kernelizations for their natural decision versions. This extends results of Cai and Chen (JCSS 1997), stating that the standard parameterizations of problems in MAX SNP and MIN F^+Pi_1 are fixed-parameter tractable, and complements recent research on problems that do not admit polynomial kernelizations (Bodlaender et al., ICALP 2008).

Cite as

Stefan Kratsch. Polynomial Kernelizations for MIN F^+Pi_1 and MAX NP. In 26th International Symposium on Theoretical Aspects of Computer Science. Leibniz International Proceedings in Informatics (LIPIcs), Volume 3, pp. 601-612, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2009)


Copy BibTex To Clipboard

@InProceedings{kratsch:LIPIcs.STACS.2009.1851,
  author =	{Kratsch, Stefan},
  title =	{{Polynomial Kernelizations for MIN F^+Pi\underline1 and MAX NP}},
  booktitle =	{26th International Symposium on Theoretical Aspects of Computer Science},
  pages =	{601--612},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-939897-09-5},
  ISSN =	{1868-8969},
  year =	{2009},
  volume =	{3},
  editor =	{Albers, Susanne and Marion, Jean-Yves},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops-dev.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2009.1851},
  URN =		{urn:nbn:de:0030-drops-18511},
  doi =		{10.4230/LIPIcs.STACS.2009.1851},
  annote =	{Keywords: Parameterized complexity, Kernelization, Approximation algorithms}
}